Search results for "recursive function"
Showing 10 of 26 documents
Enumerable classes of total recursive functions: Complexity of inductive inference
1994
This paper presents results on the complexity of inductive inference for enumerable classes of total recursive functions, where enumeration is understood in a more general sense than the usual recursive enumeration. The complexity is measured as the worst-case mindchange (error) number over the first n functions of the given class. Three generalizations are considered.
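As a rough illustration of this measure (a toy sketch, not the paper's constructions), the Python fragment below runs plain identification-by-enumeration on each of the first n functions of a finite numbering and takes the worst-case mindchange count; the finite horizon and all names are assumptions of the example.

```python
# Illustrative only: worst-case mindchange count over the first n
# functions of a numbering, using identification-by-enumeration.
# The finite `horizon` stands in for "in the limit" and is an
# assumption of this sketch, not part of the model.

def mindchanges(numbering, target_idx, horizon=20):
    guess, changes, seen = None, 0, []
    for x in range(horizon):
        seen.append((x, numbering[target_idx](x)))
        # Guess the first index consistent with all data seen so far.
        new = next(i for i, f in enumerate(numbering)
                   if all(f(a) == b for a, b in seen))
        if guess is not None and new != guess:
            changes += 1              # a mindchange
        guess = new
    return changes

def worst_case(numbering, n, horizon=20):
    return max(mindchanges(numbering, i, horizon) for i in range(n))

# Example: step functions f_c(x) = 1 if x >= c else 0. Learning f_c
# costs c - 1 mindchanges under this enumeration order (for c >= 1).
funcs = [lambda x, c=c: 1 if x >= c else 0 for c in range(5)]
print(worst_case(funcs, 5))           # -> 3
```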
On the Intrinsic Complexity of Learning
1995
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks.
Learning by the Process of Elimination
2002
Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, herein we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all but one of the possible hypotheses, where the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger than, weaker than, or of the same power as usual Gold-style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for el…
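To make the elimination model concrete, here is a minimal sketch over a finite numbering of total functions; the paper treats r.e. classes and arbitrary numberings, and every name below is an assumption of the example.

```python
# Toy elm-learning: delete every index contradicted by the data and
# succeed once exactly one hypothesis survives. The finite class and
# `horizon` are simplifications of this sketch.

def elm_learn(numbering, target, horizon=20):
    alive = set(range(len(numbering)))
    for x in range(horizon):
        y = target(x)
        # Eliminate all hypotheses that disagree with the example (x, y).
        alive = {i for i in alive if numbering[i](x) == y}
        if len(alive) == 1:
            return alive.pop()        # the one hypothesis left standing
    return None                       # not separated within the horizon

# Example: constant functions c_0 .. c_4 with target c_3; one example
# already eliminates every competing hypothesis.
funcs = [lambda x, c=c: c for c in range(5)]
print(elm_learn(funcs, funcs[3]))     # -> 3
```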
Co-learning of recursive languages from positive data
1996
The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages as well as a hypothesis space for it are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one correctly describes L.
General inductive inference types based on linearly-ordered sets
1996
In this paper, we reconsider the definitions of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types more general than the previously known ones. We investigate properties of the new inductive inference types and compare them to other types.
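The bounding discipline itself is easy to picture. In the toy sketch below, Python tuples under lexicographic order stand in for the linearly ordered set (here modelling ω·2); nothing in it is the paper's construction.

```python
# Toy sketch: every hypothesis change must be paid for by a strictly
# smaller element of a linearly ordered set. Pairs (k, n) compared
# lexicographically model omega*2: starting at (1, 0) the learner may
# procrastinate, deferring the choice of its finite budget (0, n).

class MindchangeBudget:
    def __init__(self, start):
        self.counter = start          # current element of the ordered set
        self.hypothesis = None

    def guess(self, h, new_counter=None):
        if self.hypothesis is not None and h != self.hypothesis:
            # A mindchange: the counter must strictly decrease.
            if new_counter is None or not new_counter < self.counter:
                raise ValueError("mindchange without a smaller counter")
            self.counter = new_counter
        self.hypothesis = h

b = MindchangeBudget(start=(1, 0))    # budget not yet fixed
b.guess("h0")                         # the first guess is free
b.guess("h1", new_counter=(0, 5))    # now commit: at most 5 more changes
b.guess("h2", new_counter=(0, 4))
```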
On the reducibility of function classes
1972
Copy of a paper published in Russian in 1972.
Probabilistic versus deterministic memory limited learning
1995
Kolmogorov numberings and minimal identification
1995
Identification of programs for computable functions from their graphs by algorithmic devices is a well studied problem in learning theory. Freivalds and Chen consider identification of ‘minimal’ and ‘nearly minimal’ programs for functions from their graphs. To address certain problems in minimal identification for Gödel numberings, Freivalds later considered minimal identification in Kolmogorov numberings. Kolmogorov numberings are in some sense optimal numberings and have some nice properties. We prove certain hierarchy results for minimal identification in every Kolmogorov numbering. We also compare minimal identification in Gödel numbering versus minimal identification in Kol…
Memory limited inductive inference machines
1992
The traditional model of learning in the limit is restricted so as to allow the learning machines only a fixed, finite amount of memory to store input and other data. A class of recursive functions is presented that cannot be learned deterministically by any such machine, but can be learned by a memory-limited probabilistic learning machine with probability 1.
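To make the restriction concrete (a toy sketch only; the paper's separating class and its probabilistic learner are considerably more involved and are not reproduced here):

```python
# Minimal sketch of the memory-limited model: after each example the
# learner keeps only a fixed, finite amount of state and never re-reads
# earlier input. As a toy instance, the class of constant functions is
# learnable with a single remembered value.

def limited_learner(examples):
    state = None                      # the learner's entire memory
    for (x, y) in examples:
        if state is None:
            state = y                 # remember the constant once
        yield state                   # hypothesis: "f is constantly state"

# Example run on the graph of the constant-3 function.
print(list(limited_learner([(0, 3), (1, 3), (2, 3)])))   # -> [3, 3, 3]
```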